
    Eye tracking observers during color image evaluation tasks

    This thesis investigated the eye movement behavior of subjects during image-quality evaluation and chromatic adaptation tasks. Specifically, the objectives were to learn where people center their attention during color preference judgments, to examine the differences between paired comparison, rank order, and graphical rating tasks, and to determine what strategies are adopted when selecting or adjusting achromatic regions on a soft-copy display. In judging the most preferred image, measures of fixation duration showed that observers spent about 4 seconds per image in the rank order task, 1.8 seconds per image in the paired comparison task, and 3.5 seconds per image in the graphical rating task. Spatial distributions of fixations across the three tasks were highly correlated in four of the five images. Peak areas of attention gravitated toward faces and semantic features. Introspective reports were not always consistent with where people foveated, suggesting that observers' self-reported regions of importance were broader than those revealed by the eye movement plots. Psychophysical results across these tasks generated similar, but not identical, scale values for three of the five images. The differences in scales are likely related to statistical treatment and image confusability rather than to eye movement behavior. In adjusting patches to appear achromatic, about 95% of the total adjustment time was spent fixating only on the patch. This result shows that even when participants are free to move their eyes in this kind of task, central adjustment patches can discourage normal image viewing behavior. When subjects did look around (less than 5% of the time), they did so early in the trial, and their foveations were consistently directed toward semantic features, not shadows or achromatic surfaces. This indicates that viewers do not seek out near-neutral objects to confirm that their patch adjustments appear achromatic in the context of the scene, nor do they scan the image in order to adapt to a gray-world average. As demonstrated in other studies, the mean chromaticity of the image influenced observers' patch adjustments. Adaptation to the D93 white point was about 65% complete from D65, which agrees reasonably with a time course of adaptation occurring over a 20 to 30 second exposure to the adapting illuminant. In selecting the most achromatic regions in the image, viewers spent 60% of the time scanning the scene; unlike in the achromatic patch adjustment task, their foveations were consistently directed toward achromatic regions and near-neutral objects, as would be expected. The eye movement records show behavior similar to that expected in a visual search task.
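
    The 65% figure above reflects a standard way of quantifying degree of adaptation: how far the observer's achromatic setting has shifted from the original white point toward the new one in a uniform chromaticity space. The sketch below is a minimal illustration of that computation, not code from the thesis; the u'v' coordinates for D93 and the sample neutral setting are approximate, illustrative values.

```python
import math

def degree_of_adaptation(start_white, target_white, neutral_setting):
    """Fraction of the chromatic shift from start_white to target_white
    reflected in an observer's achromatic (neutral) setting.
    All points are (u', v') coordinates in the CIE 1976 UCS diagram."""
    su, sv = start_white
    tu, tv = target_white
    nu, nv = neutral_setting
    full_shift = math.hypot(tu - su, tv - sv)
    # Project the observed shift onto the start->target direction so that
    # off-axis adjustment error does not inflate the estimate.
    along_axis = ((nu - su) * (tu - su) + (nv - sv) * (tv - sv)) / full_shift
    return along_axis / full_shift

d65 = (0.1978, 0.4683)       # standard D65 u'v' white point
d93 = (0.1888, 0.4457)       # approximate u'v' for a 9300 K white
setting = (0.1920, 0.4536)   # illustrative observer neutral setting

print(f"adaptation {degree_of_adaptation(d65, d93, setting):.0%} complete")
```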

    The wearable eyetracker: a tool for the study of high-level visual tasks

    Even as the sophistication and power of computer-based vision systems grow, the human visual system remains unsurpassed in many visual tasks. Vision delivers a rich representation of the environment without conscious effort, but the perception of a high-resolution, wide field-of-view scene is largely an illusion made possible by the concentration of visual acuity near the center of gaze, coupled with a large, low-acuity periphery. Human observers are typically unaware of this extreme anisotropy because the visual system is equipped with a sophisticated oculomotor system that rapidly moves the eyes to sample the retinal image several times every second. These eye movements are programmed and executed below the level of conscious awareness, so self-report is an unreliable way to learn how trained observers perform complex visual tasks. Eye movements have been studied extensively under controlled laboratory conditions, but as a metric of visual performance in complex, real-world tasks they remain a powerful, under-utilized tool for the study of high-level visual processes. Recorded gaze patterns provide externally visible markers of the spatial and temporal deployment of attention to objects and actions. In order to study vision in the real world, we have developed a self-contained, wearable eyetracker for monitoring complex tasks. The eyetracker can be worn for an extended period of time, does not restrict natural movements or behavior, and preserves peripheral vision. The wearable eyetracker can be used to study performance in a range of visual tasks, from situational awareness to directed visual search.

    Economical Fabrication of Thick-Section Ceramic Matrix Composites

    A method was developed for producing thick-section [>2 in. (approx. 5 cm)], continuous fiber-reinforced ceramic matrix composites (CMCs). Ultramet-modified fiber interface coating and melt infiltration processing, developed previously for thin-section components, were used to fabricate CMCs an order of magnitude greater in thickness [up to 2.5 in. (approx. 6.4 cm)]. Melt processing first involves infiltrating a fiber preform with the desired interface coating, and then with carbon to partially densify the preform. A molten refractory metal is then infiltrated; it reacts with the excess carbon to form the carbide matrix without damaging the fiber reinforcement. Infiltration occurs from the inside out as the molten metal fills virtually all the available void space. The densified composites achieved a flexural strength of 41 ksi (approx. 283 MPa).
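
    The dimensions and strength above mix U.S. customary and SI units; as a quick sanity check on the quoted conversions (using the exact factors 1 in = 2.54 cm and 1 ksi = 6.894757 MPa), the snippet below reproduces the approximate metric values given in parentheses.

```python
IN_TO_CM = 2.54          # exact by definition
KSI_TO_MPA = 6.894757    # 1 ksi = 6.894757 MPa

for inches in (2.0, 2.5):
    print(f"{inches} in = {inches * IN_TO_CM:.1f} cm")
print(f"41 ksi = {41 * KSI_TO_MPA:.0f} MPa")
# -> 2.0 in = 5.1 cm, 2.5 in = 6.4 cm, 41 ksi = 283 MPa
```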

    Portable Eyetracking: A Study of Natural Eye Movements

    Visual perception, operating below conscious awareness, effortlessly provides the experience of a rich representation of the environment, continuous in space and time. Conscious visual perception is made possible by the 'foveal compromise,' the combination of the high-acuity fovea and a sophisticated suite of eye movements. Our illusory visual experience cannot be understood by introspection, but monitoring eye movements lets us probe the processes of visual perception. Four tasks representing a wide range of complexity were used to explore visual perception: image quality judgments, map reading, model building, and hand-washing. Very short fixation durations were observed in all tasks, some as short as 33 msec. While some tasks showed little variation in eye movement metrics, differences in eye movement patterns and high-level strategies were observed in the model building and hand-washing tasks. Performance in the hand-washing task revealed a new type of eye movement: 'planful' eye movements were made to objects well in advance of a subject's interaction with the object. Often occurring in the middle of another task, they provide 'overlapping' temporal information about the environment, a mechanism that helps produce our conscious visual experience.
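
    Fixation statistics like the 33 msec durations reported above depend on how raw gaze samples are segmented into fixations. One common method is dispersion-threshold identification (I-DT); the sketch below is a minimal version for illustration, with the sample rate, dispersion threshold, and minimum duration chosen as plausible assumptions rather than the parameters actually used in this work.

```python
def detect_fixations(samples, sample_rate_hz=250,
                     max_dispersion_deg=1.0, min_duration_ms=33):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: gaze positions as (x, y) in degrees of visual angle.
    Returns (start_index, end_index, duration_ms) tuples. Thresholds
    here are illustrative, not this study's actual parameters."""
    min_samples = max(1, int(min_duration_ms * sample_rate_hz / 1000))
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        xs = [p[0] for p in samples[start:end]]
        ys = [p[1] for p in samples[start:end]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion_deg:
            # Grow the window while dispersion stays under threshold.
            while end < len(samples):
                x, y = samples[end]
                if (max(xs + [x]) - min(xs + [x]) +
                        max(ys + [y]) - min(ys + [y])) > max_dispersion_deg:
                    break
                xs.append(x)
                ys.append(y)
                end += 1
            fixations.append((start, end - 1,
                              (end - start) * 1000 / sample_rate_hz))
            start = end          # resume after this fixation
        else:
            start += 1           # slide the window forward one sample
    return fixations
```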

    Blockade of HIV-1 Infection of New World Monkey Cells Occurs Primarily at the Stage of Virus Entry

    HIV-1 naturally infects chimpanzees and humans, but does not infect Old World monkeys because of replication blocks that occur after virus entry into the cell. To understand the species-specific restrictions operating on HIV-1 infection, the ability of HIV-1 to infect the cells of New World monkeys was examined. Primary cells derived from common marmosets and squirrel monkeys support every phase of HIV-1 replication with the exception of virus entry. Efficient HIV-1 entry typically requires binding of the viral envelope glycoproteins to the host cell receptors: CD4 and either the CCR5 or CXCR4 chemokine receptor. HIV-1 did not detectably bind or utilize squirrel monkey CD4 for entry, and marmoset CD4 was also very inefficient compared with human CD4. A marmoset CD4 variant, in which residues 48 and 59 were altered to the amino acids found in human CD4, supported HIV-1 entry efficiently. The CXCR4 molecules of both marmosets and squirrel monkeys supported HIV-1 infection, but the CCR5 proteins of both species were only marginally functional. These results demonstrate that the CD4 and CCR5 proteins of New World monkeys represent the major restriction against HIV-1 replication in these primates. Directed adaptation of the HIV-1 envelope glycoproteins to common marmoset receptors might allow the development of New World monkey models of HIV-1 infection.

    Using Human Observer Eye Movements in Automatic Image Classifiers

    We explore the way in which people look at images of different semantic categories (e.g., handshake, landscape), and directly relate those results to computational approaches for automatic image classification. Our hypothesis is that the eye movements of human observers differ for images of different semantic categories, and that this information can be effectively used in automatic content-based classifiers. First, we present eye tracking experiments that show the variations in eye movements (i.e., fixations and saccades) across different individuals for images of five categories: handshakes (two people shaking hands), crowd (cluttered scenes with many people), landscapes (nature scenes without people), main object in uncluttered background (e.g., an airplane flying), and miscellaneous (people and still lifes). The eye tracking results suggest that similar viewing patterns occur when different subjects view different images in the same semantic category. Using these results, we examine how empirical data obtained from eye tracking experiments across different semantic categories can be integrated with existing computational frameworks, or used to construct new ones. In particular, we examine the Visual Apprentice, a system in which image classifiers are learned (using machine learning) from user input as the user defines a multiple-level object definition hierarchy based on an object and its parts (scene, object, object-part, perceptual area, region) and labels examples for specific classes (e.g., handshake). The resulting classifiers are applied to automatically classify new images (e.g., as handshake/non-handshake). Although many eye tracking experiments have been performed, to our knowledge this is the first study that specifically compares eye movements across categories and links category-specific eye tracking results to automatic image classification techniques.
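
    As a purely illustrative sketch of how gaze data might feed an automatic classifier of this kind, the snippet below reduces each viewing record to a few summary statistics (fixation count, mean fixation duration, mean saccade amplitude) and fits a generic scikit-learn classifier. The feature set, toy data, and model choice are assumptions for illustration; they are not the Visual Apprentice's actual design.

```python
import math
from sklearn.ensemble import RandomForestClassifier

def gaze_features(fixations):
    """Summarize one viewing record as a small feature vector:
    fixation count, mean fixation duration, mean saccade amplitude.
    (Illustrative features, not the Visual Apprentice's actual inputs.)

    fixations: list of (x, y, duration_ms) in scan order."""
    durations = [f[2] for f in fixations]
    amplitudes = [math.hypot(b[0] - a[0], b[1] - a[1])
                  for a, b in zip(fixations, fixations[1:])]
    return [len(fixations),
            sum(durations) / len(durations),
            sum(amplitudes) / len(amplitudes) if amplitudes else 0.0]

# Toy stand-in data: in practice each record would come from an eye
# tracking session over one image, labeled with the image's category.
records = [
    [(100, 120, 250), (110, 125, 300), (400, 380, 220)],
    [(50, 60, 180), (500, 70, 150), (900, 650, 160), (20, 600, 170)],
]
labels = ["handshake", "landscape"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([gaze_features(r) for r in records], labels)
print(clf.predict([gaze_features(records[0])]))  # classify a record
```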